perm filename FOREWO[S90,JMC] blob
sn#883739 filedate 1990-04-16 generic text, type C, neo UTF8
%forewo[s90,jmc] Foreword to Parallel Lisp: Languages and Systems
rpg,jjw
Am I truthful?
Ito asked me to write a Foreword for the proceedings of the
Sendai workshop. Here's a draft. It contains some remarks
about history of parallelism about which I haven't had time
to verify by the necessary library work. Is there anything
that contradicts what you know?
Since computers were first invented, it has been known
that serial computation has limits that can be far exceeded by
using parallel computation. Even very early computers used
parallelism in carrying out arithmetic operations, and improved
hardware has expanded this kind of parallelism.
The first project to build a parallel computer was
probably Illiac 4, proposed in the early 1960s. It was
over-elaborate; its cellular-automaton-influenced design made it
almost immune to programming, and by the time it was working, it
had been overrun by the Cray-1, an ordinary serial computer
with added vector facilities.
Parallel computing poses a harsh dilemma for the system
designer. The largest number of arithmetic operations per second
is obtained by designs that offer very limited communication
among the processors. If the problem fits such a design, it can
run very fast, but for many kinds of problems, effective parallelism
cannot be obtained without good communication. Designs offering
the best communication, e.g. fully shared full-speed memory,
cannot compute as fast as other designs and don't scale easily to
very large numbers of processors. Ingenuity sometimes provides
unexpected solutions, but sometimes it seems that no amount of
ingenuity will substitute for shared memory.
The largest numerical computations are those involving
partial differential equations. When these are replaced by
difference equations in the most obvious ways, they seem to lend
themselves to regular arrays of processors. However, as soon as
shock waves require concentrating the computation on dynamically
selected parts of space, and radiation propagates influences at
the speed of light, the most obvious grids waste computation.
The idea of queue-based multi-processing arose in the
early 1960s, but support was not offered for actually
implementing it. The idea is that processes can dynamically
generate subprocesses that can be done in parallel, and these
subtasks are put in a queue structure from which processors take
tasks when they become free. On the one hand, queue-based
multi-processing seems to require a shared memory, which is
expensive. On the other hand, it offers straightforward ways of
programming almost any kind of problem using techniques that
aren't far from those used in programming for serial computers.
Moreover, the programs produced don't depend on the number of
processors, which can even change dynamically. The languages
needed are just the usual serial languages augmented by a few
constructions for declaring parallelism.
Queue-based multi-processing is particularly well suited
for symbolic computation, where the same recursive process may
involve data structures of similar structure but of enormously
varied size, and where the data structures are dynamically
determined. Lisp can be made into a parallel language in a
variety of ways without distorting its character. Moreover, many
Lisp programs written for serial machines can be made to take
advantage of parallelism of this kind. Putting Lisp programs on
parallel machines based on the idea of a cellular automaton is
problematical, and if a solution is found for a particular
program, it is likely to be strongly configuration dependent.
Projects to build parallel Lisp systems in the form of
compilers and interpreters for existing or announced shared
memory multi-processors began in the middle 1980s and have
proceeded uneventfully. It seems to be a straightforward task
whenever the necessary resources can be assembled and maintained.
The initial proposals for parallel constructs were similar to
each other, and my original idea in proposing the workshop
reported in these papers was that it would be a standardization
conference, and on the basis of some experience with the parallel
constructs, a proposal could be made for the incorporation of
parallelism into Common Lisp. Unfortunately, it seems that the
field of parallel Lisp is not quite ready for standardization. I
hope the idea will be pursued in a future meeting.
The present workshop is perhaps the first in which
extensive experience in actually implementing and using the
parallel constructs is reported. The approaches
taken are adequately introduced in the Preface.
This page is the version that was emailed to Ito April 16 and
benefitted from suggestions by Dick Gabriel and Joe Weening.
Since computers were first invented, it has been known
that serial computation has limits that can be far exceeded by
using parallel computation. Even very early computers used
parallelism in carrying out arithmetic operations, and improved
hardware has expanded this kind of parallelism.
The first project to build a parallel computer was probably
Illiac 4, proposed in the early 1960s. It was over-elaborate; its
cellular-automaton-influenced design made it almost immune to
programming, and by the time it was working, it had been overrun by
the Cray-1 and other ordinary serial computers with added vector
facilities and pipelining.
Parallel computing poses a harsh dilemma for the system
designer. The largest number of arithmetic operations per second is
obtained by designs that offer very limited communication among the
processors. If the problem fits such a design, it can run very fast,
but for many kinds of problem, effective parallelism cannot be obtained
without good communication. Designs offering the best communication,
e.g. fully shared full-speed memory, cannot compute as fast as other
designs and don't scale easily to very large numbers of processors.
Ingenuity sometimes provides unexpected solutions, but sometimes it
seems that no amount of ingenuity will substitute for shared memory.
The largest numerical computations are those involving partial
differential equations. When these are replaced by difference
equations in the most obvious ways, they seem to lend themselves to
regular arrays of processors. However, as soon as shock waves require
concentrating the computation on dynamically selected parts of space,
and radiation propagates influences at the speed of light, the most
obvious grids waste computation.
The idea of queue-based multiprocessing arose in the early
1960s, but support was not offered for actually implementing it. The
idea is that processes can dynamically generate subprocesses that can
be done in parallel, and these subtasks are put in a queue structure
from which processors take tasks when they become free. On the one
hand, queue-based multiprocessing seems to require a shared memory,
which is expensive. On the other hand, it offers straightforward ways
of programming almost any kind of problem using techniques that aren't
far from those used in programming for serial computers. Moreover,
the programs produced don't depend on the number of processors, which
can even change dynamically. The languages needed are just the usual
serial languages augmented by a few constructions for declaring
parallelism.
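The scheme just described can be sketched in a few lines. Below is a minimal illustration in Python rather than Lisp; the task names and the use of a timeout for termination are my own simplifications, not taken from any of the systems reported in these proceedings.

```python
import queue
import threading

# A sketch of queue-based multiprocessing, assuming shared memory:
# subtasks go into one shared queue, and whichever worker is free
# takes the next one.

tasks = queue.Queue()
results = []                      # shared structure, hence the lock
results_lock = threading.Lock()

def fib_task(n):
    # Instead of recursing serially, enqueue the two subproblems as
    # independent subtasks. Leaves contribute their value, and the
    # sum of all leaf values equals fib(n).
    if n < 2:
        with results_lock:
            results.append(n)
    else:
        tasks.put(lambda: fib_task(n - 1))
        tasks.put(lambda: fib_task(n - 2))

def worker():
    # Take tasks until the queue stays empty; a real system would use
    # a proper termination protocol rather than a timeout.
    while True:
        try:
            task = tasks.get(timeout=0.2)
        except queue.Empty:
            return
        task()

# The program does not depend on the number of workers, which could
# even change while it runs.
NUM_WORKERS = 4
tasks.put(lambda: fib_task(10))
workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sum(results))   # fib(10) = 55
```

Note that the recursion's shape is unchanged from the serial version; only the bookkeeping of who runs which subtask has moved into the queue.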
Queue-based multiprocessing is particularly well suited
for symbolic computation, where the same recursive process may
involve data structures of similar structure but of enormously
varied size, and where the data structures are dynamically
determined. Lisp can be made into a parallel language in a
variety of ways without distorting its character. Moreover, many
Lisp programs written for serial machines can be made to take
advantage of parallelism of this kind. Putting Lisp programs on
parallel machines based on the idea of a cellular automaton is
problematical, and if a solution is found for a particular
program, it is likely to be strongly configuration dependent.
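As a hedged illustration of how little a serial recursive program need change, the sketch below (again in Python; the names `serial_sum` and `psum` are hypothetical) gives a parallel version with exactly the same recursive shape as the serial one, where shallow branches are merely marked for concurrent evaluation.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum(tree):
    # Serial version: sum the leaves of a nested-list tree,
    # a stand-in for a Lisp cons structure of dynamically
    # determined shape and size.
    if not isinstance(tree, list):
        return tree
    return sum(serial_sum(branch) for branch in tree)

def psum(tree, pool, depth=0):
    # Parallel version: identical structure, but branches near the
    # root are evaluated concurrently; deeper branches fall back to
    # the serial code, since subtrees vary enormously in size.
    if not isinstance(tree, list):
        return tree
    if depth < 2:
        futures = [pool.submit(psum, b, pool, depth + 1) for b in tree]
        return sum(f.result() for f in futures)
    return sum(psum(b, pool, depth + 1) for b in tree)

tree = [1, [2, 3], [[4, 5], 6], 7]
with ThreadPoolExecutor(max_workers=4) as pool:
    assert psum(tree, pool) == serial_sum(tree) == 28
```

The depth cutoff stands in for the scheduling decisions a real parallel Lisp system makes; the point is only that the parallel program is the serial program plus a small annotation, not a reorganization around a fixed processor configuration.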
Projects to build parallel Lisp systems in the form of
compilers and interpreters for existing or announced shared
memory multiprocessors began in the middle 1980s and have
proceeded uneventfully. It seems to be a straightforward task
whenever the necessary resources can be assembled and maintained.
The initial proposals for parallel constructs were similar to
each other. In fact my original idea in proposing the workshop
reported in these papers was that it would be a standardization
conference, and on the basis of some experience with the parallel
constructs, a proposal could be made for the incorporation of
parallelism into Common Lisp. Unfortunately, it seems that the
field of parallel Lisp is not quite ready for standardization. I
hope standardization will be pursued in a future meeting.
The present workshop is perhaps the first in which extensive
experience in actually implementing and using the parallel constructs
is reported. The approaches taken are adequately
introduced in the Preface.
It seems to me that both queue-based multiprocessing and
systems with weaker communication are destined to survive and
will be suitable for different kinds of application. Queue-based
multiprocessing will provide general and straightforward
facilities for all kinds of work, but some kinds of program will
compute faster on more specialized systems.